Deep residual learning (ResNet) is a new method for training very deep neural networks using identity mappings for shortcut connections. ResNet won the ImageNet ILSVRC 2015 classification task and has achieved state-of-the-art performance on many computer vision tasks. However, the effect of residual learning on noisy natural language processing tasks is still not well understood. In this paper, we design a novel convolutional neural network (CNN) with residual learning and investigate its impact on the task of distantly supervised noisy relation extraction. Contrary to the popular belief that ResNet only works well for very deep networks, we find that even with 9 layers of CNNs, using identity mapping significantly improves performance on distantly supervised relation extraction.
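To make the idea concrete, the sketch below shows a residual convolutional block with an identity-mapping shortcut (y = F(x) + x) written in PyTorch. It is a minimal illustration of the general technique, not the paper's exact 9-layer architecture; the channel size, kernel size, and the name ResidualConvBlock are assumptions chosen for the example.

```python
import torch
import torch.nn as nn


class ResidualConvBlock(nn.Module):
    """Two 1-D convolutions with an identity shortcut: y = F(x) + x.

    Illustrative only; not the authors' exact configuration.
    """

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2  # keep the sequence length unchanged
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, sequence_length), e.g. word + position embeddings
        residual = x
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        # Identity mapping: add the unmodified input back before the nonlinearity
        return self.relu(out + residual)


if __name__ == "__main__":
    block = ResidualConvBlock(channels=64)
    sentences = torch.randn(8, 64, 120)   # a batch of 8 encoded sentences
    print(block(sentences).shape)          # torch.Size([8, 64, 120])
```

Because the shortcut is a plain addition of the input, gradients flow through it unchanged, which is what lets identity mapping help even in relatively shallow stacks such as the 9-layer CNN studied here.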